
    Emotional expressions reconsidered: challenges to inferring emotion from human facial movements

    It is commonly assumed that a person’s emotional state can be readily inferred from his or her facial movements, typically called emotional expressions or facial expressions. This assumption influences legal judgments, policy decisions, national security protocols, and educational practices; guides the diagnosis and treatment of psychiatric illness, as well as the development of commercial applications; and pervades everyday social interactions as well as research in other scientific fields such as artificial intelligence, neuroscience, and computer vision. In this article, we survey examples of this widespread assumption, which we refer to as the common view, and we then examine the scientific evidence that tests this view, focusing on the six most popular emotion categories used by consumers of emotion research: anger, disgust, fear, happiness, sadness, and surprise. The available scientific evidence suggests that people do sometimes smile when happy, frown when sad, scowl when angry, and so on, as proposed by the common view, more often than would be expected by chance. Yet how people communicate anger, disgust, fear, happiness, sadness, and surprise varies substantially across cultures, situations, and even across people within a single situation. Furthermore, similar configurations of facial movements variably express instances of more than one emotion category. In fact, a given configuration of facial movements, such as a scowl, often communicates something other than an emotional state. Scientists agree that facial movements convey a range of information and are important for social communication, emotional or otherwise. But our review suggests an urgent need for research that examines how people actually move their faces to express emotions and other social information in the variety of contexts that make up everyday life, as well as careful study of the mechanisms by which people perceive instances of emotion in one another. We make specific research recommendations that will yield a more valid picture of how people move their faces to express emotions and how they infer emotional meaning from facial movements in situations of everyday life. This research is crucial to provide consumers of emotion research with the translational information they require.

    GANimation: one-shot anatomically consistent facial animation

    Recent advances in generative adversarial networks (GANs) have shown impressive results for the task of facial expression synthesis. The most successful architecture is StarGAN (Choi et al., CVPR 2018), which conditions the GAN’s generation process on images of a specific domain, namely a set of images of people sharing the same expression. While effective, this approach can only generate a discrete number of expressions, determined by the content and granularity of the dataset. To address this limitation, in this paper we introduce a novel GAN conditioning scheme based on action unit (AU) annotations, which describe the anatomical facial movements defining a human expression in a continuous manifold. Our approach allows controlling the magnitude of activation of each AU and combining several of them. Additionally, we propose a weakly supervised strategy to train the model that requires only images annotated with their activated AUs, and we exploit a novel self-learned attention mechanism that makes our network robust to changing backgrounds, lighting conditions, and occlusions. Extensive evaluation shows that our approach goes beyond competing conditional generators, both in its capability to synthesize a much wider range of expressions ruled by anatomically feasible muscle movements and in its capacity to deal with images in the wild. The code of this work is publicly available at https://github.com/albertpumarola/GANimation.
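    The conditioning idea described above, feeding the generator a continuous vector of AU activation magnitudes alongside the input image, can be sketched roughly as follows. This is a minimal NumPy illustration of how a continuous conditioning vector is commonly attached to an image-to-image generator's input (by tiling it into per-pixel channels); the function name and shapes are hypothetical, not the authors' actual architecture.

    ```python
    import numpy as np

    def condition_on_aus(image: np.ndarray, au_vector: np.ndarray) -> np.ndarray:
        """Concatenate per-pixel copies of the AU activation vector to the
        image channels, a common way to feed a continuous conditioning
        vector into an image-to-image generator.

        image:     (H, W, C) array with values in [0, 1]
        au_vector: (K,) array of AU activation magnitudes in [0, 1]
        returns:   (H, W, C + K) generator input
        """
        h, w, _ = image.shape
        # Broadcast the K AU activations to an (H, W, K) feature map.
        au_map = np.broadcast_to(au_vector, (h, w, au_vector.shape[0]))
        return np.concatenate([image, au_map], axis=-1)

    # Toy example: a 4x4 RGB image and a 17-dimensional AU vector.
    img = np.random.rand(4, 4, 3)
    aus = np.zeros(17)
    aus[3] = 0.6   # partially activate one action unit
    x = condition_on_aus(img, aus)
    print(x.shape)  # (4, 4, 20)
    ```

    Because each AU channel is a real-valued magnitude rather than a one-hot domain label, the same generator input format supports interpolating activations and combining several AUs, which is what lets the approach escape the discrete-expression limitation of dataset-defined domains.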


    Using the information embedded in the testing sample to break the limits caused by the small sample size in microarray-based classification

    Background: Microarray-based tumor classification is characterized by a very large number of features (genes) and a small number of samples. In such cases, statistical techniques cannot determine which genes are correlated with each tumor type. A popular solution is the use of a subset of pre-specified genes. However, molecular variations are generally correlated with a large number of genes, and a gene that is not correlated with some disease may, in combination with other genes, express itself.
    Results: In this paper, we propose a new classification strategy that can reduce the effect of over-fitting without the need to pre-select a small subset of genes. Our solution works by taking advantage of the information embedded in the testing samples. We note that a well-defined classification algorithm works best when the data are properly labeled; hence, our classification algorithm will discriminate all samples best when the testing sample is assumed to belong to the correct class. We compare our solution with several well-known alternatives for tumor classification on a variety of publicly available data sets. Our approach consistently leads to better classification results.
    Conclusion: Studies indicate that thousands of samples may be required to extract useful statistical information from microarray data. Herein, it is shown that this problem can be circumvented by using the information embedded in the testing samples.
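    The strategy the abstract describes, tentatively assigning the testing sample to each candidate class and keeping the label under which all samples are best discriminated, can be sketched as follows. This is a toy transductive scheme using a nearest-centroid rule as the inner classifier; the function, the scoring rule, and the synthetic data are illustrative assumptions, not the paper's actual method.

    ```python
    import numpy as np

    def transductive_label(X_train, y_train, x_test):
        """For each candidate class, tentatively assign x_test to that class
        and score how well a simple nearest-centroid rule then separates
        ALL samples (training + test). Return the class whose tentative
        assignment yields the best overall fit.
        """
        best_label, best_score = None, -np.inf
        for c in np.unique(y_train):
            X = np.vstack([X_train, x_test])
            y = np.append(y_train, c)
            # Class centroids under this tentative labeling.
            centroids = {k: X[y == k].mean(axis=0) for k in np.unique(y)}
            # Score: fraction of samples closest to their own class centroid.
            preds = [min(centroids, key=lambda k: np.linalg.norm(xi - centroids[k]))
                     for xi in X]
            score = np.mean(np.array(preds) == y)
            if score > best_score:
                best_label, best_score = c, score
        return best_label

    # Two well-separated 2-D classes; the test point lies near class 1.
    rng = np.random.default_rng(0)
    X_train = np.vstack([rng.normal(0, 0.1, (10, 2)),
                         rng.normal(3, 0.1, (10, 2))])
    y_train = np.array([0] * 10 + [1] * 10)
    print(transductive_label(X_train, y_train, np.array([2.9, 3.1])))  # 1
    ```

    The key point mirrors the abstract: only the correct tentative label leaves the combined data internally consistent, so the unlabeled sample itself contributes information that a purely inductive classifier trained on the small training set would miss.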

    Usefulness of two independent DNA and RNA tissue-based multiplex assays for the routine care of advanced NSCLC patients

    Personalized medicine is now a paradigm in lung cancer management, offering important benefits to patients. This study aimed to test the feasibility and utility of embedding two multiplexed genomic platforms in the routine workup of advanced non-squamous non-small cell lung cancer (NSCLC) patients. Two parallel multiplexed approaches were performed, based on DNA sequencing and direct digital detection of RNA with nCounter® technology, to evaluate gene mutations and fusions. The results were used to guide genotype-directed therapies, and patient outcomes were collected. A total of 224 advanced non-squamous NSCLC patients were prospectively included in the study. Overall, 85% of samples were successfully characterized at the DNA and RNA levels, and oncogenic drivers were found in 68% of patients, with KRAS, EGFR, MET∆ex14, BRAF, and ALK being the most frequent (31%, 19%, 5%, 4%, and 4%, respectively). Among all patients with complete genotyping results and follow-up data (n = 156), the median overall survival (OS) was 1.90 years (95% CI 1.69-2.10) for individuals harbouring an actionable driver treated with a matched therapy, compared with 0.59 years (95% CI 0.39-0.79) in those not eligible for any targeted therapy and 0.61 years (95% CI 0.12-1.10) in patients with no drivers identified (p < 0.001). Integrating DNA and RNA multiplexing technologies into the routine molecular testing of advanced NSCLC patients is feasible and useful, and highlights the need for widespread integration of comprehensive molecular diagnosis into lung cancer care.